Preparing for the Image Literate Decade
Abstract
The utility of digital over traditional imaging methods in terms of data delivery, access, and manipulation is undeniable and well recognized. Data literacy in such digital matters is well established. What is not yet developed, but slowly emerging, is an accompanying image literacy: the ability to measure, test, and visually distinguish good images from bad ones based on project requirements. Leading practitioners are realizing that there are significant additional responsibilities that come with the adoption of digital imaging. Not the least of these is the control of the performance variability that comes with the freedom of system component selection. Several initiatives currently being developed by national libraries, institutions, and funding organizations will directly influence clients' expectations. We describe how US and European initiatives will influence the requirements for imaging performance and how these will be managed in digital conversion projects. We interpret these developments in terms of the tools and methods needed to quantify and maintain performance consistency. Rather than presenting a list of requirements for, e.g., effective image resolution, distortion, or tone and color reproduction, we present a way to establish an imaging quality-assurance program. The elements of a successful program include: establishing performance goals, efficient test plans and performance-tracking tools, and interpretation of results for corrective action.

Introduction

In the last few years, several initiatives aimed at improving both the efficiency and quality of imaging practice for digital conversion projects have been developed. In this paper we report on how progress in this area can be understood in the context of corresponding quality-assurance efforts in manufacturing industries. We also show how national and international imaging practice guidelines are influencing the expectations of both service providers and cultural institutions.

The adoption of digital imaging technologies for content delivery, access, and manipulation is well recognized and almost universal. What is not always recognized is that the very choice and variety of system hardware and software components can lead to variable quality in the imaging results. A good working knowledge of such matters, what we call image literacy, is needed by both institutions and internal or external imaging service providers. What are now being developed are techniques and tools that facilitate the measurement, testing, and visual evaluation needed to identify areas for improvement in digital imaging content. As tools and educational resources become more available, leading practitioners are realizing that there are significant additional responsibilities that come with the adoption of digital imaging. Not the least of these is to control the increased performance variability that comes with the freedom to choose among the hardware, software, and image-manipulation components of the acquisition system. Unlike the world of analog imaging, where one could confidently rely on the history-rich reputation of a few manufacturers for imaging performance integrity and consistency, today's digital imaging landscape offers fewer assurances. Fortunately, there is a gradual awakening to literate imaging through international standards, education, and appropriately prepared imaging specifications.
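As an illustration of how such a specification-driven, by-the-numbers approach can be put into practice, the sketch below shows one way measured imaging results could be tracked against project performance goals and flagged for corrective action, as described in the abstract. The metric names, target values, and tolerances are hypothetical placeholders, not values taken from any particular guideline.

```python
# Minimal sketch of a performance-tracking check for a digitization QA program.
# Metric names, targets, and tolerances are illustrative placeholders only;
# a real project would take them from its own imaging specification.

from dataclasses import dataclass


@dataclass
class PerformanceGoal:
    name: str         # e.g. "SFR at half-sampling frequency"
    target: float     # aim point from the project specification
    tolerance: float  # allowed deviation before corrective action


def check_batch(goals, measurements):
    """Compare one batch of measured values against the project goals.

    Returns a list of (metric name, measured value, status) tuples, where
    status is "ok", "corrective action needed", or "not measured".
    """
    report = []
    for goal in goals:
        value = measurements.get(goal.name)
        if value is None:
            report.append((goal.name, None, "not measured"))
            continue
        ok = abs(value - goal.target) <= goal.tolerance
        report.append((goal.name, value, "ok" if ok else "corrective action needed"))
    return report


if __name__ == "__main__":
    goals = [
        PerformanceGoal("sampling_ppi", target=400.0, tolerance=8.0),
        PerformanceGoal("sfr_half_sampling", target=0.25, tolerance=0.10),
        PerformanceGoal("mean_delta_e", target=0.0, tolerance=4.0),
    ]
    batch = {"sampling_ppi": 396.0, "sfr_half_sampling": 0.08, "mean_delta_e": 3.1}
    for name, value, status in check_batch(goals, batch):
        print(f"{name}: {value} -> {status}")
```

Tracking such reports over time, batch by batch, is what turns individual test-target measurements into the performance-tracking element of a quality-assurance program.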
Manufacturers and service providers should expect to be increasingly challenged by clients with respect to imaging performance and consistency. We adapt a Scottish definition of literacy (and numeracy) as it applies to digital imaging:

Image literacy (n): the ability to read, interpret, and use generally accepted imaging results, to handle the corresponding performance information, to express ideas and opinions, to make decisions, and to solve related problems.

This definition is especially appropriate because it articulates a move away from the colloquial, and frequently confusing, imaging terms and practices towards standardized imaging measurement protocols. These are easily communicated and facilitate sound, economical, and appropriate image-digitizing decisions, by the numbers. Such literacy has been advocated in the past by several authors. Lessons on digital capture specsmanship were presented by Williams in 2003. This was followed by more general policy papers by Stelmach and Murray [4], who made a case for quality control and quality assurance in digitizing workflows. Puglia et al. provided guidelines in 2004, consistent with the above developments. Two Dutch initiatives reduce several of these ideas to practice as imaging requirements, not just guidelines: the Metamorfoze effort [6], and projects for the Nationaal Archief, Sound and Vision, and Film Museum Institutes.

A rational imaging understanding fueled by sound technical backing is beginning to prevail and will likely continue to emerge over the next decade. Image literacy will be enabled more widely on a number of fronts. It will be motivated by a need for simple and consistent imaging, where collection content and expected image usage are matched to technical requirements for image acquisition. The enablement will be provided through 1) educational and training resources, 2) efficient measurement and quality-control tools, and 3) a willingness to apply these diligently. While some service providers and device manufacturers may view the added requirements as a burden, the more competent among them will welcome such literacy as a way to distinguish their services from the less worthy. Content providers, too, should be aware that the knowledge this approach provides will allow them to better understand the prices that service providers and device manufacturers quote for demanding imaging tasks.

Organizing the Idioms

In our proposed definition of literacy, the reading and writing of imaging is fundamental. Having had the advantage of offering classes and training on digital imaging performance, we have concluded that eliminating ambiguous communication is the first and most important step in creating solid image literacy. For instance, confusion continues to exist between image sampling and optical resolution. Dynamic range is still specified in terms of the number of encoding bits per pixel. And there is wide confusion around the unusual forms of image 'noise' that manifest themselves in digital imaging.

Figure 1: Portion of the imaging performance framework

Just as the Swedish botanist Carolus Linnaeus proposed a botanical taxonomy to organize plant names, we provide one for imaging performance evaluation. The purpose is more than just a nomenclature translator, or glossary. It is a hierarchical framework for understanding the landscape of digital-capture performance and its related standards, be they sanctioned or de facto. The fundamental classes are Signal and Noise.
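To make the shape of such a framework concrete, the sketch below encodes the two fundamental classes, the primary measures, and a few vernacular-to-metric associations of the kind discussed in the following paragraphs and indicated in Fig. 1. The specific entries are illustrative examples consistent with the text, not a complete transcription of the taxonomy.

```python
# Illustrative sketch of the imaging performance framework as a small data
# structure. The entries follow the classes and measures described in the
# text (Fig. 1); they are examples, not the complete taxonomy.

FRAMEWORK = {
    "Signal": {
        "OECF": ["tone reproduction", "gamma"],
        "SFR": ["resolution", "sharpness", "soft", "blurred", "focus",
                "haloing", "unsharp masking", "edge enhancement"],
    },
    "Noise": {
        "spatial distortion": ["geometric distortion", "color misregistration"],
        "radiometric distortion": ["fixed pattern noise", "random noise"],
    },
}


def standard_measure(colloquial_term):
    """Map a colloquial quality term to the standard measure used to evaluate it."""
    for fundamental_class, measures in FRAMEWORK.items():
        for measure, terms in measures.items():
            if colloquial_term.lower() in terms:
                return fundamental_class, measure
    return None


print(standard_measure("blurred"))              # ('Signal', 'SFR')
print(standard_measure("fixed pattern noise"))  # ('Noise', 'radiometric distortion')
```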
For each of these we identify primary imaging performance measures. The primary measures for signal-capture attributes are the opto-electronic conversion function (OECF) and the spatial frequency response (SFR). Similarly, noise is classified as a distortion, either spatial or radiometric in nature. More commonly used terms, such as resolution, gamma, fixed-pattern noise, or color misregistration, are related to these four divisions. A graphical description of a portion of this framework is provided in Fig. 1. A full description of the taxonomy can be found in a companion paper [7].

The objective of the framework is to indicate the relationship between common imaging performance measures and methods. We do this with an eye to the development of practical, economical, standard approaches that can simplify communication and facilitate negotiation in this area. The framework also associates true performance-metric names with their vernacular surrogates. For instance, qualitative terms like soft, blurred, aberration, or focus are all colloquial terms used to describe image resolution and the appearance of sharpness. Similarly, haloing, unsharp masking, and edge enhancement are all generic terms for describing sharpening operations. Both can be understood and evaluated using a standard spatial frequency response (SFR) evaluation, as indicated in Fig. 1.

Accuracy, Precision and Calibration

The control and improvement of digital imaging content requires that we observe and understand the important characteristics of our image acquisition process. Adopting the terminology of statistics, we can think of a measurement as an estimate of an underlying parameter. For example, when we compute the 'average value' (sample mean) from several observations, we are estimating the true mean value of the process being observed. A measurement whose observations are centered on the true value, as in Fig. 2a, is said to be accurate. However, if the observed data are closely grouped, as in Fig. 2b, the measurement is said to be precise.

Figure 2: Two types of measurement variability; (a) indicates high accuracy but low precision, (b) shows low accuracy but high precision

Naturally, we would prefer measurements of imaging performance with both high accuracy and high precision. Given the choice between situations a and b of Fig. 2, however, selection b is often preferable if it implies that the error in the average measurement is predictable. A predictable error, or bias, can often be corrected as part of a measurement and analysis system; such an analysis step, which seeks to improve measurement accuracy, is a form of calibration.

The concept of a correctable bias can also be applied directly to the content of digital images. In one sense, a digital image is itself a form of measurement of an object or document. The physical characteristics being 'measured' by the discrete pixel values can be expressed in terms of the reflected or transmitted light, as selected by an image detector sensitive to particular wavelengths. When we observe (measure) that a digital image is 'inaccurate', we often attempt to remove bias in the data by image editing. The experienced practitioner of imaging performance evaluation is aware, however, that there are limits to calibration. Just as in image editing, the data may also have low precision, and large random variation cannot be removed by a simple bias correction.
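The distinction between accuracy, precision, and calibration can be made concrete with a small numerical sketch. The example below simulates repeated measurements of a known reference value, estimates the bias from the sample mean, and applies a simple offset correction; the spread of the corrected values is unchanged, which is the limit to calibration noted above. The reference value, bias, and noise level are arbitrary illustrative numbers.

```python
# Sketch of bias estimation and offset calibration for repeated measurements.
# Reference value, bias, and noise level are arbitrary illustrative numbers.

import random
import statistics

random.seed(1)

TRUE_VALUE = 100.0  # known reference (e.g. a target patch value, in arbitrary units)
BIAS = 4.0          # systematic error: measurements read high on average
NOISE_SD = 1.5      # random measurement variation (limits precision)

# Simulate repeated observations of the same reference.
observations = [TRUE_VALUE + BIAS + random.gauss(0.0, NOISE_SD) for _ in range(50)]

sample_mean = statistics.mean(observations)
estimated_bias = sample_mean - TRUE_VALUE  # accuracy error
spread = statistics.stdev(observations)    # precision, unchanged by calibration

# Offset calibration: subtract the estimated bias from every observation.
calibrated = [x - estimated_bias for x in observations]

print(f"estimated bias:            {estimated_bias:.2f}")
print(f"spread before calibration: {spread:.2f}")
print(f"spread after calibration:  {statistics.stdev(calibrated):.2f}")  # same spread
print(f"mean after calibration:    {statistics.mean(calibrated):.2f}")   # ~TRUE_VALUE
```

The offset correction centers the measurements on the reference value (accuracy improves), but the standard deviation is untouched: calibration cannot substitute for a more precise measurement process.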